Video Person Re-Identification Using Attribute-Enhanced Features


Abstract

In this work, we propose to boost video-based person re-identification (Re-ID) with attribute-enhanced feature representation. To this end, we not only use ID-relevant attributes more effectively, but also, for the first time in the literature, harness ID-irrelevant attributes to help model training. The former mainly include gender, age, and clothing characteristics, which carry rich, supplementary information about a pedestrian; the latter, such as viewpoint and action, have seldom been used for identification. In particular, we enhance the salient regions of the image with a novel Attribute Salient Region Enhance (ASRE) module that attends accurately to the pedestrian's body, so as to better separate the target from the background. Furthermore, we find that many subject-relevant factors, such as view angle and movement, have a great impact on the two-dimensional appearance of a pedestrian. We then exploit both through a triplet loss called the Viewpoint and Action-Invariant (VAI) loss. Based on the above, we design an Attribute Salience Assisted Network (ASA-Net) that performs attribute recognition along with identity recognition, salient region enhancement, and hard sample mining. Extensive experiments on the MARS and DukeMTMC-VideoReID datasets show that our method outperforms the state of the art, and visualizations of the learned results further confirm the effectiveness of the proposed method.
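The VAI loss described above builds on the standard triplet loss used throughout the Re-ID literature: an anchor embedding is pulled toward a positive (same identity) and pushed away from a negative (different identity) by at least a margin. The sketch below shows only this generic triplet loss, not the authors' VAI variant; the function names and the margin value are illustrative assumptions.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Generic triplet loss: enforce d(anchor, positive) + margin <= d(anchor, negative).

    The VAI loss in the paper extends this idea by choosing hard samples
    across viewpoint and action, so the embedding becomes invariant to them.
    """
    d_pos = euclidean(anchor, positive)
    d_neg = euclidean(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)
```

With a well-separated negative the loss is zero; when the negative is nearly as close as the positive, the loss is positive and gradients would push the embeddings apart.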


Similar Articles

Person re-identification by unsupervised video matching

Most existing person re-identification (ReID) methods rely only on the spatial appearance information from either one or multiple person images, whilst ignoring the space-time cues readily available in video or image-sequence data. Moreover, they often assume the availability of exhaustively labelled cross-view pairwise data for every camera pair, making them non-scalable to ReID applications in r...


Person Re-identification by Video Ranking

Current person re-identification (re-id) methods typically rely on single-frame imagery features, and ignore space-time information from image sequences. Single-frame (single-shot) visual appearance matching is inherently limited for person re-id in public spaces due to visual ambiguity arising from non-overlapping camera views where viewpoint and lighting changes can cause significant appearan...


Person Re-identification: What Features Are Important?

State-of-the-art person re-identification methods seek robust person matching through combining various feature types. Often, these features are implicitly assigned with a single vector of global weights, which are assumed to be universally good for all individuals, independent of their different appearances. In this study, we show that certain features play a more important role than others unde...


Saliency Weighted Features for Person Re-identification

In this work we propose a novel person re-identification approach. The solution, inspired by human gazing capabilities, wants to identify the salient regions of a given person. Such regions are used as a weighting tool in the image feature extraction process. Then, such novel representation is combined with a set of other visual features in a pairwise-based multiple metric learning framework. F...


Deep-Person: Learning Discriminative Deep Features for Person Re-Identification

Recently, many methods of person re-identification (ReID) rely on part-based feature representation to learn a discriminative pedestrian descriptor. However, the spatial context between these parts is ignored for the independent extractor on each separate part. In this paper, we propose to apply Long Short-Term Memory (LSTM) in an end-to-end way to model the pedestrian, seen as a sequence of bo...



Journal

Journal: IEEE Transactions on Circuits and Systems for Video Technology

Year: 2022

ISSN: 1051-8215, 1558-2205

DOI: https://doi.org/10.1109/tcsvt.2022.3189027